Results 1 - 20 of 394
1.
Rev. bras. oftalmol ; 83: e0006, 2024. tab, graf
Article in Portuguese | LILACS-Express | LILACS | ID: biblio-1535603

ABSTRACT



ABSTRACT Objective: To obtain fundoscopy images with portable, low-cost equipment and, using artificial intelligence (AI), assess the presence of diabetic retinopathy (DR). Methods: Fundus images of diabetic patients' eyes were obtained using a smartphone coupled to a device with a 20D lens. Using AI, the presence of DR was classified by a binary algorithm. Results: 97 ocular fundoscopy images were evaluated (45 normal and 52 with DR). With AI, diagnostic accuracy in classifying the presence of DR ranged from approximately 70% to 100%. Conclusion: The approach using a low-cost portable device showed satisfactory efficacy in screening diabetic patients with or without DR, making it useful for settings that lack infrastructure.
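As an illustrative aside (toy labels, not the study's data), accuracy figures like the 70% to 100% reported above reduce to confusion-matrix counts over binary labels:

```python
def screening_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for a binary DR screen.

    y_true / y_pred: sequences of 0 (normal) and 1 (retinopathy).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Four toy fundus reads with one missed case of retinopathy
m = screening_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

In a screening context, sensitivity (not missing disease) usually matters more than raw accuracy, which is why abstracts like these report all three.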

2.
Arq. bras. oftalmol ; 87(5): e2022, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1527853

ABSTRACT

ABSTRACT Purpose: This study aimed to evaluate the classification performance of pretrained convolutional neural network models, or architectures, using a fundus image dataset containing eight disease labels. Methods: The publicly available Ocular Disease Intelligent Recognition database was used for the diagnosis of eight diseases. This database contains a total of 10,000 fundus images from both eyes of 5,000 patients across eight categories: healthy, diabetic retinopathy, glaucoma, cataract, age-related macular degeneration, hypertension, myopia, and others. Ocular disease classification performance was investigated by constructing three pretrained convolutional neural network architectures, VGG16, Inceptionv3, and ResNet50, with the adaptive moment (Adam) optimizer. The models were implemented in Google Colab, which made the task straightforward by avoiding hours spent installing the environment and supporting libraries. To evaluate the effectiveness of the models, the dataset was divided into 70%, 10%, and 20% for training, validation, and testing, respectively. For each classification, the training images were augmented to 10,000 fundus images. Results: ResNet50 achieved an accuracy of 97.1%, sensitivity of 78.5%, specificity of 98.5%, and precision of 79.7%, and had the best area under the curve and final score for classifying cataract (area under the curve = 0.964, final score = 0.903). By contrast, VGG16 achieved an accuracy of 96.2%, sensitivity of 56.9%, specificity of 99.2%, precision of 84.1%, area under the curve of 0.949, and final score of 0.857. Conclusions: These results demonstrate the ability of pretrained convolutional neural network architectures to identify ophthalmological diseases from fundus images. ResNet50 can be a good architecture for detecting and classifying glaucoma, cataract, hypertension, and myopia; Inceptionv3 for age-related macular degeneration and other diseases; and VGG16 for normal fundi and diabetic retinopathy.
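The 70/10/20 split described in the Methods can be sketched as follows (a minimal sketch, not the authors' code; the seed value is arbitrary):

```python
import random

def split_dataset(items, seed=42):
    """Shuffle and split items into 70% train / 10% validation / 20% test."""
    items = list(items)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(0.7 * n)
    n_val = int(0.1 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# 10,000 image indices, as in the database described above
train, val, test = split_dataset(range(10_000))
```

Shuffling before splitting matters here because database exports are often ordered by patient or by disease label, which would otherwise bias the test set.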



3.
Chinese Journal of Clinical Thoracic and Cardiovascular Surgery ; (12): 145-152, 2024.
Article in Chinese | WPRIM | ID: wpr-1006526

ABSTRACT

Lung adenocarcinoma is a prevalent histological subtype of non-small cell lung cancer, with morphologic and molecular features that are critical for prognosis and treatment planning. In recent years, with the development of artificial intelligence technology, its application to the study of pathological subtypes and gene expression of lung adenocarcinoma has gained widespread attention. This paper reviews the progress of machine learning and deep learning in pathological subtype classification and gene expression analysis of lung adenocarcinoma, summarizes current problems and challenges, and anticipates future directions of artificial intelligence in lung adenocarcinoma research.

4.
Journal of Prevention and Treatment for Stomatological Diseases ; (12): 43-49, 2024.
Article in Chinese | WPRIM | ID: wpr-1003443

ABSTRACT

Objective: To evaluate the effectiveness of deep learning techniques in the intelligent diagnosis of dental caries and periapical periodontitis, and to explore the preliminary value of deep learning in the diagnosis of oral diseases. Methods: A dataset of 2,298 periapical radiographs, covering healthy teeth, dental caries, and periapical periodontitis, was used for the study. The dataset was randomly divided into 1,573 training images, 233 validation images, and 492 test images. After comparing several neural network models, the better-performing MobileNetV3 model was selected for dental disease diagnosis and optimized by tuning the network hyperparameters. Accuracy, precision, recall, and F1 score were used to evaluate the model's ability to recognize dental caries and periapical periodontitis, and class activation maps were used to visually analyze the network's performance. Results: The algorithm achieved satisfactory diagnostic performance in classifying healthy teeth, dental caries, and periapical periodontitis, with precision, recall, and accuracy of 99.42%, 99.73%, and 99.60%, respectively, and an F1 score of 99.57%. The class activation maps also showed that the network model can accurately extract features of dental diseases. Conclusion: The tooth lesion detection algorithm based on the MobileNetV3 model can eliminate interference from image quality and human factors and has high diagnostic accuracy, meeting the needs of dental teaching and clinical application.
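As a consistency check on the figures above, the F1 score is the harmonic mean of precision and recall, and the reported 99.42% precision and 99.73% recall do recover the reported 99.57% F1:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.9942, 0.9973)
```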

5.
Rev. cuba. inform. méd ; 15(2)dic. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536291

ABSTRACT



In recent decades, photoacoustic imaging has proven effective in supporting the diagnosis of some diseases as well as in medical research, since it can obtain information about the human body with specific characteristics, good resolution, and a penetration depth of 1 cm to 6 cm, depending largely on the tissue studied. Photoacoustic imaging is comparatively young and emerging, and it promises real-time measurements with non-invasive, radiation-free procedures. Applying Deep Learning to photoacoustic images, in turn, makes it possible to manage data and transform them into useful information that generates knowledge. These applications have unique advantages that facilitate clinical adoption, and it may be possible with these techniques to provide reliable medical diagnoses. The aim of this article is therefore to provide an overview of cases combining Deep Learning with photoacoustic techniques.

6.
Rev. cuba. inform. méd ; 15(2)dic. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536294

ABSTRACT



The field of radiology has seen notable advances in recent decades, with developments ranging from image quality improvement and digitization to computer-aided detection. In particular, the emergence of Artificial Intelligence techniques based on Deep Learning and Computer Vision has promoted innovative solutions in radiological diagnosis and analysis. This article explores the relevance of open source developments and models to the progress of these techniques, highlighting the impact that collaboration and open access have had on scientific advancement in the field. The research takes a qualitative approach, with a descriptive, retrospective, longitudinal scope. A documentary analysis of the evolution and impact of open source in Radiology was carried out, highlighting multidisciplinary collaboration. Use cases, advantages, challenges, and ethical considerations related to the implementation of AI-based solutions in Radiology were also examined. The open source approach has proven a positive influence on Radiology, with the potential to improve medical care by offering more precise and accessible solutions. However, ethical and technical challenges remain that require attention.

7.
Radiol. bras ; 56(5): 263-268, Sept.-Oct. 2023. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1529323

ABSTRACT

Abstract Objective: To validate a deep learning (DL) model for bone age estimation in individuals in the city of São Paulo, comparing it with the Greulich and Pyle method. Materials and Methods: This was a cross-sectional study of hand and wrist radiographs obtained for the determination of bone age. The manual analysis was performed by an experienced radiologist. The model used was based on a convolutional neural network that placed third in the 2017 Radiological Society of North America challenge. The mean absolute error (MAE) and the root-mean-square error (RMSE) were calculated for the model versus the radiologist, with comparisons by sex, race, and age. Results: The sample comprised 714 examinations. There was a correlation between the two methods, with a coefficient of determination of 0.94. The MAE of the predictions was 7.68 months, and the RMSE was 10.27 months. There were no statistically significant differences between sexes or among races (p > 0.05). The algorithm overestimated bone age in younger individuals (p = 0.001). Conclusion: Our DL algorithm demonstrated potential for estimating bone age in individuals in the city of São Paulo, regardless of sex and race. However, improvements are needed, particularly in relation to its use in younger patients.
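The two error measures used above can be computed directly from the per-examination differences between the model's and the radiologist's bone-age estimates (toy values below, not the study's data):

```python
import math

def mae(errors):
    """Mean absolute error of a list of signed errors (here, months)."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root-mean-square error; penalizes large errors more than MAE."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Toy model-minus-radiologist differences in months
errs = [6.0, -8.0, 10.0, -4.0]
```

RMSE is always at least as large as MAE, which is why the study reports 10.27 months against 7.68 months: the gap between the two reflects how spread out the large errors are.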



8.
Colomb. med ; 54(3)sept. 2023.
Article in English | LILACS-Express | LILACS | ID: biblio-1534290

ABSTRACT

This statement revises our earlier "WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications" (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding lack of authenticity of content when using chatbots. These recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop.



9.
Indian J Ophthalmol ; 2023 Aug; 71(8): 3039-3045
Article | IMSEAR | ID: sea-225176

ABSTRACT

Purpose: To analyze the efficacy of a deep learning (DL)-based artificial intelligence (AI) algorithm in detecting the presence of diabetic retinopathy (DR) and glaucoma suspect, as compared with diagnosis by specialists, and secondarily to explore whether the use of this algorithm can reduce cross-referral in three clinical settings: a diabetologist clinic, a retina clinic, and a glaucoma clinic. Methods: This was a prospective observational study. Patients between 35 and 65 years of age were recruited from glaucoma and retina clinics at a tertiary eye care hospital and from a physician's clinic. Non-mydriatic fundus photography was performed according to disease-specific protocols. These images were graded by the AI system and by specialist graders and comparatively analyzed. Results: Of 1,085 patients, 362 were seen at glaucoma clinics, 341 at retina clinics, and 382 at physician clinics. The kappa agreement between AI and the glaucoma grader was 85% [95% confidence interval (CI): 77.55-92.45%], and with the retina grader it was 91.90% (95% CI: 87.78-96.02%). The retina grader from the glaucoma clinic had 85% agreement, and the glaucoma grader from the retina clinic had 73% agreement. The sensitivity and specificity of AI glaucoma grading were 79.37% (95% CI: 67.30-88.53%) and 99.45% (95% CI: 98.03-99.93%), respectively; for DR grading they were 83.33% (95% CI: 51.59-97.91%) and 98.86% (95% CI: 97.35-99.63%). The cross-referral accuracy for DR and glaucoma was 89.57% and 95.43%, respectively. Conclusion: The DL-based AI system showed high sensitivity and specificity in both patients with DR and those with glaucoma, and there was good agreement between the specialist graders and the AI system.
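The kappa agreement reported above corrects the raw agreement between two graders for the agreement expected by chance. A minimal sketch for binary gradings (illustrative, not the study's code):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary graders (e.g. AI vs. specialist)."""
    n = len(rater_a)
    po = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n  # observed agreement
    pa1 = sum(rater_a) / n          # rater A's rate of positive calls
    pb1 = sum(rater_b) / n          # rater B's rate of positive calls
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)   # agreement expected by chance
    return (po - pe) / (1 - pe)

kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])  # toy gradings
```

Kappa is 1.0 for perfect agreement and 0.0 when graders agree no more than chance, which is why it is preferred over raw percent agreement when one class dominates.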

10.
Medisur ; 21(4)ago. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1514578

ABSTRACT



Background: Autonomy allows students to think for themselves, critically and independently, to consider different points of view, and to act accordingly. It is a necessary indicator in the study of learning-to-learn skills. Objective: To characterize autonomy as an indicator of learning-to-learn skills in medical students. Methods: A mixed-methods design of the sequential explanatory type was used. The research was carried out from October 2021 to March 2022 at the University of Medical Sciences of Cienfuegos. The intentional, non-probabilistic sample comprised 255 first-year medical students. Information was collected through a questionnaire evaluating the level of development of learning-to-learn skills, observations of teaching activities, and focus groups. Results: According to the questionnaire, autonomy is present in 45.4% of the students. In the focus groups, some students acknowledged deficiencies in some indicators of autonomy, which is consistent with the data obtained from the observations of teaching activities. Conclusions: Autonomy, as a key indicator of learning-to-learn skills in first-year students at the University of Medical Sciences of Cienfuegos, was characterized by low expression in the students' learning processes.

11.
Indian Pediatr ; 2023 Jul; 60(7): 561-569
Article | IMSEAR | ID: sea-225442

ABSTRACT

Background: The emergence of artificial intelligence (AI) tools such as ChatGPT and Bard is disrupting a broad swathe of fields, including medicine. In pediatric medicine, AI is increasingly being used across multiple subspecialties, but its practical application still faces a number of key challenges. Consequently, a concise overview of the roles of AI across the multiple domains of pediatric medicine is required, which the current study seeks to provide. Aim: To systematically assess the challenges, opportunities, and explainability of AI in pediatric medicine. Methodology: A systematic search was carried out on the peer-reviewed databases PubMed Central and Europe PubMed Central and in grey literature, using search terms related to machine learning (ML) and AI, for English-language publications from 2016 to 2022. A total of 210 articles were retrieved and screened with PRISMA for abstract, year, language, context, and proximal relevance to the research aims. A thematic analysis was carried out to extract findings from the included studies. Results: Twenty articles were selected for data abstraction and analysis, with three consistent themes emerging. Eleven articles address the current state-of-the-art application of AI in diagnosing and predicting health conditions such as behavioral and mental health disorders, cancer, and syndromic and metabolic diseases. Five articles highlight specific challenges of AI deployment in pediatric medicine: data security, handling, authentication, and validation. Four articles set out future opportunities for AI adoption: the incorporation of Big Data, cloud computing, precision medicine, and clinical decision support systems. Collectively, these studies critically evaluate the potential of AI in overcoming current barriers to adoption.
Conclusion: AI is proving disruptive within pediatric medicine and is presently associated with challenges, opportunities, and the need for explainability. AI should be viewed as a tool to enhance and support clinical decision-making rather than a substitute for human judgement and expertise. Future research should consequently focus on obtaining comprehensive data to ensure the generalizability of research findings.

12.
Article | IMSEAR | ID: sea-218822

ABSTRACT

Modern cloud computing platforms are having trouble keeping up with the enormous volume of data generated by crowdsourcing and the intense computational requirements of conventional deep learning applications. Edge computing can reduce this resource consumption. The goal of a healthcare system is to offer a dependable, well-planned solution that improves societal health; patients are more satisfied with their care when doctors take their medical histories into account in designing healthcare systems and providing care. As a result, the healthcare sector is becoming increasingly competitive. Healthcare systems are expanding significantly, which raises issues such as massive data volume, response time, latency, and security vulnerability. Thus, fog computing, as a well-known distributed architecture, could assist in solving these problems.

13.
Journal of Forensic Medicine ; (6): 66-71, 2023.
Article in English | WPRIM | ID: wpr-984182

ABSTRACT

Bone development shows a certain regularity with age, which can be used to infer age in fields such as justice, medicine, and archaeology. As a non-invasive way to evaluate the epiphyseal development stage, MRI is widely used in age estimation of the living. In recent years, the rapid development of machine learning has significantly improved the effectiveness and reliability of living age estimation, making it one of the main directions of current research. This paper summarizes methods for age estimation from knee joint MRI, introduces current research trends, and discusses future directions for application.


Subject(s)
Epiphyses/diagnostic imaging , Age Determination by Skeleton/methods , Reproducibility of Results , Magnetic Resonance Imaging/methods , Knee Joint/diagnostic imaging
14.
Chinese Journal of Orthopaedics ; (12): 72-80, 2023.
Article in Chinese | WPRIM | ID: wpr-993412

ABSTRACT

Objective: To develop a deep transfer learning method for the differential diagnosis of osteonecrosis of the femoral head (ONFH) and other common hip diseases using anteroposterior hip radiographs. Methods: Patients with ONFH, developmental dysplasia of the hip (DDH), and other hip diseases, including primary hip osteoarthritis, non-infectious inflammatory hip disease, and femoral neck fracture, treated at the First Affiliated Hospital of Guangzhou University of Chinese Medicine from January 2018 to December 2020 were enrolled. A clinical data set of anteroposterior hip radiographs of the eligible patients was created. Data augmentation by rotating and flipping images was performed to enlarge the data set, which was then divided equally into a training set and a test set. The ResNet-152 deep neural network was used, with the original Batch Normalization replaced by Transferable Normalization to construct a novel deep transfer learning model. The model was trained on the training set to distinguish ONFH and DDH from other common hip diseases on anteroposterior hip radiographs, and its classification performance was evaluated on the test set. Results: The clinical data set comprised anteroposterior radiographs of 1,024 hips: 542 with ONFH, 296 with DDH, and 186 with other common hip diseases (56 with primary osteoarthritis, 85 with non-infectious inflammatory osteoarthritis, and 45 with femoral neck fracture). After data augmentation, the data set grew to 6,144 images. The model was trained for 100,050 iterations in each task, with accuracy as the representative performance measure. In the binary classification task of identifying ONFH, the best accuracy was 95.80%; in the multi-class task of separating ONFH and DDH from other hip diseases, the best accuracy was 91.40%. The model plateaued in each task after 50,000 training iterations, with mean plateau accuracies of 95.35% (95% CI: 95.33%, 95.37%) and 90.85% (95% CI: 90.82%, 90.87%), respectively. Conclusion: The present study demonstrates the encouraging performance of a deep transfer learning method for first-visit classification of ONFH, DDH, and other hip diseases using convenient and economical anteroposterior hip radiographs.
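Growing 1,024 radiographs into 6,144 by rotation and flipping implies six variants per image. A toy sketch with nested lists standing in for image arrays (the exact set of six transforms is an assumption for illustration; the paper does not enumerate them):

```python
def rotate90(img):
    """Rotate a 2D pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Mirror a 2D pixel grid left-to-right."""
    return [row[::-1] for row in img]

def augment(img):
    """Six variants per radiograph: original, 3 rotations, 2 flips (assumed set)."""
    r90 = rotate90(img)
    r180 = rotate90(r90)
    r270 = rotate90(r180)
    vflip = [row[:] for row in img[::-1]]
    return [img, r90, r180, r270, hflip(img), vflip]

dataset = [[[1, 2], [3, 4]]] * 1024            # 1,024 toy 2x2 "radiographs"
augmented = [v for img in dataset for v in augment(img)]
```

In practice such transforms are applied to real arrays with an imaging library, but the counting logic (1,024 × 6 = 6,144) is the same.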

15.
Chinese Journal of Radiation Oncology ; (6): 422-429, 2023.
Article in Chinese | WPRIM | ID: wpr-993209

ABSTRACT

Objective: To investigate the role of a three-dimensional dose distribution-based deep learning model in predicting distant metastasis of head and neck cancer. Methods: Radiotherapy and clinical follow-up data of 237 patients with head and neck cancer undergoing intensity-modulated radiotherapy (IMRT) at 4 different institutions were collected. Among them, 131 patients from the HGJ and CHUS institutions served as the training set, 65 patients from CHUM as the validation set, and 41 patients from HMR as the test set. The three-dimensional dose distributions and GTV contours of the 131 training patients were input into the DM-DOSE model for training, which was then validated with the validation set and finally evaluated on the independent test set. Evaluation metrics included the area under the receiver operating characteristic curve (AUC), balanced accuracy, sensitivity, specificity, concordance index, and Kaplan-Meier survival curve analysis. Results: The DM-DOSE model based on three-dimensional dose distribution and GTV contours achieved the best prognostic performance for distant metastasis of head and neck cancer, with an AUC of 0.924, and significantly distinguished patients at high and low risk of distant metastasis (log-rank test, P<0.001). Conclusion: Three-dimensional dose distribution has good predictive value for distant metastasis in head and neck cancer patients treated with IMRT, and the constructed model can effectively predict distant metastasis.
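The reported AUC of 0.924 can be read as the probability that a randomly chosen metastasis case receives a higher risk score than a randomly chosen non-case. A minimal rank-based sketch of that computation (illustrative, not the DM-DOSE code):

```python
def auc(y_true, scores):
    """AUC as the probability a positive case outranks a negative one.

    Ties count as half a win (the Mann-Whitney convention).
    """
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.4])  # toy risk scores
```

This pairwise form is O(n²) but makes the probabilistic interpretation explicit; production code sorts once and uses ranks instead.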

16.
Chinese Journal of Radiation Oncology ; (6): 319-324, 2023.
Article in Chinese | WPRIM | ID: wpr-993194

ABSTRACT

Objective: To develop an automatic segmentation method for organs at risk (OAR) in head and neck cancer radiotherapy images, based on multi-scale fusion and an attention mechanism. Methods: We proposed a new OAR segmentation method for head and neck medical images based on the U-Net convolutional neural network. Spatial and channel squeeze excitation (csSE) attention blocks were combined with the U-Net to enhance its feature expression ability, and a multi-scale block was added in the U-Net encoding stage to supplement feature information. The Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) were used as evaluation criteria. Results: Segmentation of 22 OAR in the head and neck was performed on the Medical Image Computing and Computer Assisted Intervention (MICCAI) StructSeg2019 dataset. The proposed method improved the average segmentation accuracy by 3%-6% compared with existing methods, achieving an average DSC of 78.90% and an average 95% HD of 6.23 mm across the 22 OAR. Conclusion: Automatic OAR segmentation from head and neck CT using multi-scale fusion and an attention mechanism achieves high accuracy and is promising for enhancing the accuracy and efficiency of radiotherapy in clinical practice.
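The Dice similarity coefficient used above measures the overlap between a predicted mask and a reference mask. A minimal sketch over flat binary masks (illustrative, not the authors' implementation):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists).

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

d = dice([1, 1, 0, 0], [1, 0, 0, 0])  # toy 4-voxel masks
```

Dice captures regional overlap, while the 95% Hausdorff distance also reported above captures boundary error; segmentation papers usually quote both because either one alone can look good on a bad contour.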

17.
Chinese Journal of Radiation Oncology ; (6): 42-47, 2023.
Article in Chinese | WPRIM | ID: wpr-993148

ABSTRACT

Objective: To investigate pseudo-CT generation from cone beam CT (CBCT) by deep learning for the clinical needs of adaptive radiotherapy. Methods: CBCT data from 74 prostate cancer patients, acquired with a Varian On-Board Imager, together with their simulation (planning) CT images, were used for this study. Deformable registration was performed with MIM software, and the data were randomly divided into a training set (n=59) and a test set (n=15). U-Net, Pix2PixGAN and CycleGAN were employed to learn the mapping from CBCT to planning CT. Evaluation metrics included mean absolute error (MAE), structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), with the deformed CT as reference. Image quality was also assessed separately, including soft-tissue resolution, image noise and artifacts. Results: The MAE of images generated by U-Net, Pix2PixGAN and CycleGAN were (29.4±16.1) HU, (37.1±14.4) HU and (34.3±17.3) HU, respectively. In terms of image quality, the U-Net and Pix2PixGAN outputs were excessively blurred, resulting in image distortion, whereas the CycleGAN outputs retained the CBCT image structure and improved image quality. Conclusion: CycleGAN can effectively improve the quality of CBCT images and has potential for use in adaptive radiotherapy.
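Two of the three reported metrics are straightforward to state exactly. A minimal sketch of MAE (in HU) and PSNR between a generated pseudo-CT and the deformed reference CT, on toy 1-D "images"; SSIM needs local windowed statistics and is omitted, and the 2000 HU data range is an assumption, not a value from the paper:

```python
# MAE and PSNR between reference and generated HU values (toy data).
import math

def mae(ref, gen):
    """Mean absolute error in HU."""
    return sum(abs(r - g) for r, g in zip(ref, gen)) / len(ref)

def psnr(ref, gen, data_range=2000.0):
    """PSNR in dB; data_range is the assumed HU dynamic range."""
    mse = sum((r - g) ** 2 for r, g in zip(ref, gen)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(data_range ** 2 / mse)

if __name__ == "__main__":
    ref = [0.0, 50.0, 100.0, 1000.0]    # hypothetical reference HU samples
    gen = [10.0, 40.0, 120.0, 1000.0]   # hypothetical pseudo-CT HU samples
    print(round(mae(ref, gen), 1))      # 10.0 HU
```

Note that a lower MAE (as U-Net achieved) does not guarantee better perceptual quality: blurring toward the mean reduces MAE while destroying structure, which is consistent with the paper preferring CycleGAN despite its higher MAE.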

18.
Chinese Journal of Radiological Medicine and Protection ; (12): 513-517, 2023.
Article in Chinese | WPRIM | ID: wpr-993120

ABSTRACT

Objective: To investigate a time-series deep learning model for respiratory motion prediction. Methods: Eighty respiratory motion traces from lung cancer patients were used in this study, divided into a training set and a test set at a ratio of 8:2. The Informer deep learning network was employed to predict respiratory motion with a latency of about 600 ms. Model performance was evaluated with normalized root mean square errors (nRMSE) and relative root mean square errors (rRMSE). Results: The Informer model outperformed the conventional multilayer perceptron (MLP) and long short-term memory (LSTM) models, yielding an average nRMSE and rRMSE of 0.270 and 0.365, respectively, at a prediction time of 423 ms, and 0.380 and 0.379, respectively, at a prediction time of 615 ms. Conclusions: The Informer model performs well even at longer prediction times and has potential application value for improving real-time tumor tracking.
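The abstract does not give the exact normalizations behind nRMSE and rRMSE, so the sketch below assumes a common convention: nRMSE divides the RMSE by the standard deviation of the true trace, and rRMSE by its root-mean-square value. Both the definitions and the trace values are assumptions for illustration:

```python
# nRMSE and rRMSE for a predicted respiratory trace (assumed definitions).
import math

def rmse(y, p):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def nrmse(y, p):
    """RMSE normalized by the standard deviation of the true signal."""
    mean = sum(y) / len(y)
    std = math.sqrt(sum((a - mean) ** 2 for a in y) / len(y))
    return rmse(y, p) / std

def rrmse(y, p):
    """RMSE relative to the root-mean-square of the true signal."""
    rms = math.sqrt(sum(a * a for a in y) / len(y))
    return rmse(y, p) / rms

if __name__ == "__main__":
    # Hypothetical respiratory displacement (mm) and a ~600 ms-ahead prediction.
    y = [0.0, 2.0, 4.0, 2.0, 0.0, -2.0, -4.0, -2.0]
    p = [0.5, 2.5, 3.5, 1.5, 0.5, -1.5, -3.5, -2.5]
    print(round(nrmse(y, p), 3), round(rrmse(y, p), 3))
```

Under these definitions a value of 1.0 would mean the predictor is no better than always outputting the signal mean, so the reported 0.27-0.38 indicates a substantial reduction in tracking error.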

19.
Chinese Journal of Radiological Medicine and Protection ; (12): 435-439, 2023.
Article in Chinese | WPRIM | ID: wpr-993109

ABSTRACT

Objective: To compare automatic delineation of the organs at risk (OAR) of prostate cancer by the deep learning-based uPWS R15 software and the atlas-based MIM-Maestro 6.9 software, in order to provide a reference for clinical application. Methods: CT data of 90 prostate cancer patients admitted to the Department of Radiation Oncology of the Affiliated Hospital of North Sichuan Medical College from 2018 to 2022 were retrospectively selected. The automatic delineations produced by uPWS R15 (developed by Shanghai United Imaging Medical Technology Company) and MIM-Maestro 6.9 (developed by Beijing Mingwei Vision Medical Software Company) were evaluated on five parameters: delineation time (T), Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), Hausdorff distance (HD) and mean distance to agreement (MDA). Results: The delineation time of the uPWS software was shorter than that of MIM. There were no significant differences between the two programs for the femoral heads or skin (all P>0.05). Differences were statistically significant for all metrics except HD for the right kidney (MDA: t=-3.43; DSC: z=-4.03; JSC: z=-4.16; P<0.05), left kidney (MDA: t=-3.87; DSC: z=-4.18; JSC: z=-4.41; P<0.05), small intestine (MDA: t=-8.57; DSC: z=-9.99; JSC: t=14.21; P<0.05) and rectum (MDA: z=-4.00; DSC: t=-9.98; JSC: t=9.72; P<0.05). The bladder (z=-7.88, -9.00, -8.17, -8.74; P<0.05) and spinal cord (z=-3.87, -4.43, 4.03, 3.05; P<0.05) also showed significant differences. The DSC of every structure automatically delineated by uPWS exceeded 0.7, whereas for MIM the DSC exceeded 0.7 for all OAR except the small intestine and rectum. In addition, the HD, MDA and JSC values of the OAR (bilateral femoral heads, bilateral kidneys, spinal cord, bladder, skin, rectum and small intestine) automatically delineated by uPWS were generally better than those from MIM. Conclusions: The uPWS software delineates better than MIM, but MIM can also be used clinically with manual corrections to the small intestine and rectum; both save a great deal of time in preparation for radiation therapy.
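The study reports both DSC and JSC, but for any pair of masks the two are monotone transforms of each other, JSC = DSC / (2 - DSC), so a DSC threshold of 0.7 corresponds to a JSC of roughly 0.54. A small check on toy voxel sets:

```python
# DSC and Jaccard (JSC) on toy voxel-index sets, and their exact relation.

def dsc(a, b):
    """Dice: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jsc(a, b):
    """Jaccard: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    a = set(range(10))       # hypothetical "auto" mask voxels
    b = set(range(5, 15))    # hypothetical "manual" mask voxels
    d, j = dsc(a, b), jsc(a, b)
    print(round(d, 3), round(j, 3), round(d / (2 - d), 3))  # last equals JSC
```

Because the two metrics always rank segmentations identically, the independent information in the comparison comes from the distance-based metrics (HD and MDA), which penalize boundary errors that overlap scores can hide.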

20.
Chinese Journal of Radiological Medicine and Protection ; (12): 131-137, 2023.
Article in Chinese | WPRIM | ID: wpr-993063

ABSTRACT

Objective: To synthesize non-contrast CT images from contrast-enhanced CT images using a convolutional neural network-based deep learning method; to evaluate, both subjectively and objectively, the similarity of the deep learning-synthesized non-contrast CT (DL-SNCT) images to plain CT images, taken as the gold standard; and to explore their potential clinical value. Methods: Thirty-four patients who underwent both a conventional plain scan and an enhanced CT scan in the same session were enrolled. Using the deep learning model, DL-SNCT images were generated from each patient's enhanced CT images. With plain CT as the gold standard, the image quality of the DL-SNCT images was evaluated subjectively on a 4-point scale for five indices: anatomical structure clarity, artifacts, noise level, image structure integrity and image deformation. Paired t-tests were used to compare CT values between DL-SNCT and plain CT images for anatomical regions with different hemodynamics (aorta, kidney, liver parenchyma, gluteus maximus) and for liver lesions with distinct enhancement patterns (liver cancer, liver hemangioma, liver metastasis and liver cyst). Results: In the subjective evaluation, the DL-SNCT images averaged 4 points for artifacts, noise, image structure integrity and image deformation, consistent with plain CT (P>0.05); however, the average score for anatomical clarity was slightly lower than that of plain CT (3.59±0.70 vs. 4), a significant difference (Z=-2.89, P<0.05). Among anatomical regions, the CT values of the aorta and kidney in DL-SNCT images were significantly higher than in plain CT (t=-12.89, -9.58; P<0.05), while liver parenchyma and gluteus maximus showed no significant differences (P>0.05). Among liver lesions with different enhancement patterns, the CT values of liver cancer, liver hemangioma and liver metastasis in DL-SNCT images were significantly higher than in plain CT (t=-10.84, -3.42, -3.98; P<0.05); liver cysts showed no significant difference (P>0.05). Conclusions: DL-SNCT image quality, as well as the CT values of anatomical structures with simple enhancement patterns, is comparable to gold-standard plain CT. For structures with variable enhancement and liver lesions with complex enhancement patterns, DL-SNCT images still need substantial improvement before they can be used in clinical practice.
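The objective comparison rests on the paired t-test over matched ROI measurements. A minimal sketch of the t statistic on invented HU values (the p-value additionally needs the t distribution's CDF, e.g. `scipy.stats.ttest_rel`, so only the statistic is computed here):

```python
# Paired t statistic for CT values measured at matched ROIs in two image
# sets.  HU values below are invented for illustration, not study data.
import math

def paired_t(x, y):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d = x - y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

if __name__ == "__main__":
    plain = [45.0, 50.0, 48.0, 52.0, 47.0]  # hypothetical aorta HU, plain CT
    snct = [55.0, 58.0, 60.0, 61.0, 56.0]   # hypothetical aorta HU, DL-SNCT
    print(round(paired_t(plain, snct), 2))
```

A large negative t, as in this toy example and in the paper's aorta result (t=-12.89), indicates the synthesized images systematically retain residual contrast enhancement in strongly enhancing structures.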
